
Update dependency torch to >=2.10,<3 [SECURITY] #151

Open
renovate[bot] wants to merge 1 commit into master from renovate/pypi-torch-vulnerability

Conversation


renovate bot commented on Jul 25, 2024

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package  Change
torch    >=1.2,<2.1 → >=2.10,<3

GitHub Vulnerability Alerts

CVE-2024-31583

PyTorch before v2.2.0 was discovered to contain a use-after-free vulnerability in torch/csrc/jit/mobile/interpreter.cpp.

CVE-2024-31580

PyTorch before v2.2.0 was discovered to contain a heap buffer overflow vulnerability in the component /runtime/vararg_functions.cpp. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted input.

CVE-2025-2953

A vulnerability classified as problematic has been found in PyTorch 2.6.0+cu124. Affected by this issue is the function torch.mkldnn_max_pool2d. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed publicly and may be used.

CVE-2025-3730

A vulnerability classified as problematic was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss in the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed publicly and may be used. The patch is commit 46fc5d8e360127361211cb237d5f9eef0223e567; it is recommended to apply it to fix this issue.

CVE-2025-32434

Description

I found a Remote Command Execution (RCE) vulnerability in PyTorch: when loading a model with torch.load and weights_only=True, RCE can still be achieved.

Background knowledge

https://github.com/pytorch/pytorch/security
As shown there, the official PyTorch documentation considers using torch.load() with weights_only=True to be safe.
Since it is widely known that weights_only=False is unsafe, users rely on weights_only=True to mitigate the security issue.
But as demonstrated here, even with weights_only=True, RCE can still be achieved.
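
For reference, a minimal sketch of the loading pattern at issue (the checkpoint path is a placeholder; on versions affected by this CVE, this call alone is not a sufficient safeguard against untrusted files):

import torch

# Placeholder path; assume the file comes from an untrusted source. On
# affected versions, weights_only=True does NOT guarantee safe deserialization.
state_dict = torch.load("untrusted_model.pt", map_location="cpu", weights_only=True)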

Credit

This vulnerability was found by Ji'an Zhou.


Release Notes

pytorch/pytorch (torch)

v2.10.0: PyTorch 2.10.0 Release

Compare Source

PyTorch 2.10.0 Release Notes
Highlights
  • Python 3.14 support for torch.compile(). Python 3.14t (free-threaded build) is experimentally supported as well.
  • Reduced kernel launch overhead with combo-kernel horizontal fusion in TorchInductor.
  • A new varlen_attn() op providing support for ragged and packed sequences.
  • Efficient eigenvalue decompositions with DnXgeev.
  • torch.compile() now respects use_deterministic_mode.
  • DebugMode for tracking dispatched calls and debugging numerical divergence, making it simpler to track down subtle numerical bugs.
  • Intel GPU support: expanded PyTorch support to the latest Panther Lake on Windows and Linux by enabling FP8 (core ops and scaled matmul) and complex matmul support, and extending SYCL support in the C++ Extension API for Windows custom ops.

For more details about these highlighted features, see the release blog post. Below are the full release notes for this release.

Backwards Incompatible Changes
Dataloader Frontend
  • Removed the unused data_source argument from Sampler (#163134). This is a no-op unless you have a custom sampler that uses this argument; please update your custom sampler accordingly (see the sketch after this list).
  • Removed deprecated imports for torch.utils.data.datapipes.iter.grouping (#​163438). from torch.utils.data.datapipes.iter.grouping import SHARDING_PRIORITIES, ShardingFilterIterDataPipe is no longer supported. Please import from torch.utils.data.datapipes.iter.sharding instead.
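
A minimal sketch of the Sampler change (the sampler class and its dataset handling are hypothetical); the point is simply that data_source is no longer passed to Sampler.__init__:

from torch.utils.data import Sampler

class EveryOtherSampler(Sampler):  # hypothetical custom sampler
    def __init__(self, dataset):
        # Previously: super().__init__(data_source=dataset). The argument was
        # unused and has been removed, so keep the reference yourself.
        self.dataset = dataset

    def __iter__(self):
        return iter(range(0, len(self.dataset), 2))

    def __len__(self):
        return (len(self.dataset) + 1) // 2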
torch.nn
  • Remove Nested Jagged Tensor support from nn.attention.flex_attention (#​161734)
ONNX
  • fallback=False is now the default in torch.onnx.export (#162726)
  • The exporter now uses the dynamo=True option without fallback. This is the recommended way to use the ONNX exporter. To preserve the 2.9 behavior, manually set fallback=True in the torch.onnx.export call (see the sketch below).
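
A minimal sketch of pinning the old exporter behavior (the model and input are hypothetical):

import torch

model = torch.nn.Linear(4, 2)   # hypothetical model
example = torch.randn(1, 4)

# fallback=True restores the 2.9 behavior of falling back to the
# TorchScript-based exporter when the dynamo=True path fails.
torch.onnx.export(model, (example,), "model.onnx", dynamo=True, fallback=True)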
Release Engineering
  • Rename pytorch-triton package to triton (#​169888)
Deprecations
Distributed
  • DeviceMesh
    • Added a warning for slicing flattened dim from root mesh and types for _get_slice_mesh_layout (#​164993)

We decided to deprecate an existing behavior of device-mesh slicing of a flattened dim, which goes against the PyTorch design principle of explicit over implicit.

Version <2.9
import torch
from torch.distributed.device_mesh import init_device_mesh

device_type = (
    acc.type
    if (acc := torch.accelerator.current_accelerator(check_available=True))
    else "cpu"
)
mesh_shape = (2, 2, 2)
mesh_3d = init_device_mesh(
    device_type, mesh_shape, mesh_dim_names=("dp", "cp", "tp")
)

mesh_3d["dp", "cp"]._flatten()
mesh_3["dp_cp"]  # This comes with no warning
Version >=2.10
import torch
from torch.distributed.device_mesh import init_device_mesh

device_type = (
    acc.type
    if (acc := torch.accelerator.current_accelerator(check_available=True))
    else "cpu"
)
mesh_shape = (2, 2, 2)
mesh_3d = init_device_mesh(
    device_type, mesh_shape, mesh_dim_names=("dp", "cp", "tp")
)

mesh_3d["dp", "cp"]._flatten()
mesh_3["dp_cp"]  # This will come with a warning because it implicitly change the state of the original mesh. We will eventually remove this behavior in future release. User should do the bookkeeping of flattened mesh explicitly.
Ahead-Of-Time Inductor (AOTI)
  • Move from/to to torch::stable::detail (#​164956)
JIT
  • torch.jit is not guaranteed to work in Python 3.14. Deprecation warnings have been added to user-facing torch.jit API (#​167669).

torch.jit should be replaced with torch.compile or torch.export.

ONNX
  • The dynamic_axes option in torch.onnx.export is deprecated (#​165769)

Users should supply the dynamic_shapes argument instead. See https://docs.pytorch.org/docs/stable/export.html#expressing-dynamism for more documentation.
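
A minimal sketch of the migration (model and input are hypothetical); dynamic_shapes is specified per positional input, in place of the old dynamic_axes mapping:

import torch

model = torch.nn.Linear(4, 2)   # hypothetical model
example = torch.randn(3, 4)     # dim 0 (batch) should stay dynamic

# Before (deprecated): dynamic_axes={"input": {0: "batch"}}
batch = torch.export.Dim("batch")
torch.onnx.export(
    model,
    (example,),
    "model.onnx",
    dynamo=True,
    dynamic_shapes=({0: batch},),  # one spec per positional input
)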

Profiler
  • Deprecate export_memory_timeline method (#​168036)

The export_memory_timeline method in torch.profiler is being deprecated in favor of the newer memory snapshot API (torch.cuda.memory._record_memory_history and torch.cuda.memory._export_memory_snapshot). This change adds the deprecated decorator from typing_extensions and updates the docstring to guide users to the recommended alternative.
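
A minimal sketch of the replacement workflow using the snapshot APIs named above (the output filename is a placeholder):

import torch

# Start recording allocator history, run the workload, then export a
# snapshot viewable at https://pytorch.org/memory_viz.
torch.cuda.memory._record_memory_history()
# ... run training / inference here ...
torch.cuda.memory._export_memory_snapshot("snapshot.pickle")
torch.cuda.memory._record_memory_history(enabled=None)  # stop recording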

New Features
Autograd
  • Allow setting grad_dtype on leaf tensors (#​164751)
  • Add Default Autograd Fallback for PrivateUse1 in PyTorch (#​165315)
  • Add API to annotate disjoint backward for use with torch.utils.checkpoint.checkpoint (#​166536)
Complex Frontend
Composability
cuDNN
  • BFloat16 support added to cuDNN RNN (#​164411)
  • [cuDNN][submodule] Upgrade to cuDNN frontend 1.16.1 (#​170591)
Distributed
  • LocalTensor:

    • LocalTensor is a powerful debugging and simulation tool in PyTorch's distributed tensor ecosystem. It allows you to simulate distributed tensor computations across multiple SPMD (Single Program, Multiple Data) ranks on a single process. This is incredibly valuable for: 1) debugging distributed code without spinning up multiple processes; 2) understanding DTensor behavior by inspecting per-rank tensor states; 3) testing DTensor operations with uneven sharding across ranks; 4) rapid prototyping of distributed algorithms. Note that LocalTensor is designed for debugging purposes only. It has significant overhead and is not suitable for production distributed training.
    • LocalTensor is a torch.Tensor subclass that internally holds a mapping from rank IDs to local tensor shards. When you perform a PyTorch operation on a LocalTensor, the operation is applied independently to each local shard, mimicking distributed computation (LocalTensor simulates collective operations locally, without actual network communication). LocalTensorMode is the context manager that enables LocalTensor dispatch; it intercepts PyTorch operations and routes them appropriately. The @maybe_run_for_local_tensor decorator is essential for handling rank-specific logic when implementing distributed code.
    • To get started with LocalTensor, users import from torch.distributed._local_tensor, initialize a fake process group, and wrap their distributed code in a LocalTensorMode context. Within this context, DTensor operations automatically produce LocalTensors (see the sketch after this list).
    • PRs: (#164537, #166595, #168110, #168314, #169088, #169734)
  • c10d:

    • New shrink_group implementation to expose ncclCommShrink API (#​164518)
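
A rough sketch of the LocalTensor getting-started flow described above. These are private, debug-only APIs, so the exact names and signatures below (the fake-store setup in particular) are assumptions and may differ between releases:

import torch
import torch.distributed as dist
from torch.distributed._local_tensor import LocalTensorMode  # private API

WORLD_SIZE = 4

# Assumption: a "fake" process-group backend backed by FakeStore simulates
# WORLD_SIZE ranks inside a single process, with no real communication.
from torch.testing._internal.distributed.fake_pg import FakeStore
dist.init_process_group("fake", store=FakeStore(), rank=0, world_size=WORLD_SIZE)

with LocalTensorMode(WORLD_SIZE):
    # Operations here are applied independently per simulated rank;
    # DTensor code run in this context produces LocalTensors.
    x = torch.randn(4, 4)
    y = x @ x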
Dynamo
  • torch.compile now fully works in Python 3.14 (#​167384)
  • Add option to error or disable applying side effects (#​167239)
  • Config flag (skip_fwd_side_effects_in_bwd_under_checkpoint) to allow eager and compile activation-checkpointing divergence for side-effects (#​165775)
  • torch._higher_order_ops.print for enabling printing without graph breaks or reordering (#​167571)
FX
  • Added node metadata annotation API

  • Disable preservation of node metadata when enable=False (#​164772)

  • Annotation should be mapped across submod (#​165202)

  • Annotate bw nodes before eliminate dead code (#​165782)

  • Add logging for debugging annotation (#​165797)

  • Override metadata on regenerated node in functional mode (#​166200)

  • Skip copying custom meta for gradient accumulation nodes; tag with is_gradient_acc=True (#​167572)

  • Add metadata hook for all nodes created in runtime_assert pass (#​169497)

  • Update gm.print_readable to include Annotation (#​165397)

  • Add annotation to assertion nodes in export (#​167171)

  • Add debug mode to print meta in fx graphs (#​165874)

Inductor
Ahead-Of-Time Inductor (AOTI)
  • Integrate AOTI as a backend. (#​167338)
  • Add AOTI mingw cross compilation for Windows. (#​163188)
MPS
torch.nn
ONNX
  • A new testing module torch.onnx.testing with a testing utility assert_onnx_program (#​162495)
Profiler
Quantization
  • Add _scaled_mm_v2 API (#​164141)

  • Add scaled_grouped_mm_v2 and python API (#​165154)

  • Add embedding_bag_byte_prepack_with_rowwise_min_max and embedding_bag_{2/4}bit_prepack_with_rowwise_min_max (#​162924)

  • Add MXFP4 support for _scaled_grouped_mm_v2 via FBGEMM kernels (#166530)

Release Engineering
ROCm
  • Enable grouped GEMM via regular GEMM fallback (#​162419)
  • Enable grouped GEMM via CK (#​166334, #​167403)
  • Enable ATen GEMM overload for FP32 output from FP16/BF16 inputs (#​162600)
  • Support torch.cuda._compile_kernel (#​162510)
  • Enhanced Windows support
  • load_inline (#​162577)
  • Enable AOTriton runtime compile (#​165538)
  • AOTriton scaled_dot_product_attention (#​162330)
  • Add gfx1150 gfx1151 to hipblaslt-supported GEMM lists (#​164744)
  • Add scaled_mm v2 support. (#​165528)
  • Add torch.version.rocm, distinct from torch.version.hip (#​168097)
XPU
  • Support ATen operators scaled_mm and scaled_mm_v2 for Intel GPU (#​166056)
  • Support ATen operator _weight_int8pack_mm for Intel GPU (#​160938)
  • Extend SYCL support in PyTorch CPP Extension API to allow users to implement new custom operators on Windows (#​162579)
  • Add API torch.xpu.get_per_process_memory_fraction for Intel GPU (#​165511)
  • Add API torch.xpu.set_per_process_memory_fraction for Intel GPU (#​165510)
  • Add API torch.xpu.is_tf32_supported for Intel GPU (#​163141)
  • Add API torch.xpu.can_device_access_peer for Intel GPU (#​162705)
  • Add API torch.accelerator.get_memory_info for Intel GPU (#​162564)
Improvements
Build Frontend
Composability
  • If you are using the torch.compile(backend="aot_eager") backend, it should now give results that are bitwise equivalent to eager. Previously it sometimes did not, due to extra compile-only decompositions running (#165910)
  • Some dynamic shape errors were changed to recommend using torch._check over torch._check_is_size (#164889)
  • Some unbacked (dynamic shape) improvements (#​162652, #​169612)
  • Some bugfixes for symbolic float handling in compile (#​166573, #​162788)
C++ Frontend
CUDA
  • Make torch.cuda.rng_set_state and torch.cuda.rng_get_state work in CUDA graph capture. (#​162505)
  • Enable templated kernels (#​162875)
  • Enable pre-compiled kernels (#​162972)
  • Add CUDA headers automatically (#​162634)
  • Remove outdated header_code argument (#​163165)
  • Prevent copies of std::vector in CUDA ForeachOps (#​163416)
  • Implement cuda-python CUDA stream protocol (#​163614)
  • Remove outdated checks and docs for cuBLAS determinism (#​161749)
  • Cleanup old workaround code in launch_logcumsumexp_cuda_kernel (#​164567)
  • Add a compile-time flag to trigger verbose logging for device-side asserts (#​166171)
  • Support SM 10.3 in custom CUTLASS matmuls (#​162956)
  • Enable CUTLASS matmuls on Thor (#​164836)
  • Add per_process_memory_fraction option to PYTORCH_CUDA_ALLOC_CONF (#​161035)
  • Support nested memory pools (#​168382)
  • Upgrade cuDNN to 9.15.1 for CUDA 13 builds (#​169412)
Distributed
Dynamo
  • Turn on capture_scalar_outputs and capture_dynamic_output_shape_ops when fullgraph=True (#​163121, #​163123)

  • Improved tracing for dict key hashing (#​169204)

  • Tracing support for torch.cuda.stream (#​166472)

  • Improved tracing of torch.autograd.Functions (#​166788)

  • Miscellaneous smaller tracing support additions:

  • Extend collections.defaultdict support with *args, **kwargs and custom default_factory (#​166793)

  • Support for bitwise xor (#​166065)

  • Support repr on user-defined objects (#​167372)

  • Support new typing union syntax X | Y (#​166599)

Export
  • Improved fake tensor leakage detection in export (#​163516)
  • Improved support for tensor subclasses (#​163770)
FX
  • Add tensor subclass printing support in fx/graph.py (#​164403)
  • Update Node.is_impure check if subgraph contains impure ops (#​166609, #​167443)
  • Explicitly remove call_mod_node_to_replace after inlining the submodule in const_fold._inline_module (#166871)
  • Add strict argument validation to Interpreter.boxed_run (#​166784)
  • Use stable topological sort in fuse_by_partitions (#​167397)
Inductor
  • Pruned failed compilations from Autotuning candidates (#​162673)
  • Extend triton_mm auto-tune options for HIM shapes (#​163273)
  • Various fixes for AOTI-FX backend
  • Solve for undefined symbols in dynamic input shapes (#​163044)
  • Support symbol and dynamic scalar graph inputs and outputs (#​163596)
  • Support unbacked symbol definitions (#​163729)
  • Generalize FloorDiv conversion to handle more complex launch grids. (#​163828)
  • Don't flatten constant args (#​166144)
  • Support SymInt placeholder (#167757)
  • Support torch.cond (#​163234)
  • Add tanh, exp, and sigmoid activations for Cutlass backend. (#​162535) (#​162536)
  • Hardened the experimental horizontal fusion torch._inductor.config.combo_kernels (#​162442) (#​166274) (#​162759) (#​167781) (#​168127) (#​168946) (#​168109) (#​164918)
  • Enable TMA store for TMA matmul templates on Triton. (#​160480)
  • Add Blackwell GPU templates (persistent matmul, FP8 scaled persistent + TMA GEMMs, CuTeDSL grouped GEMM, FlexFlash forward, FlexAttention configs). (#​162916) (#​163147) (#​167340) (#​167040) (#​165760)
  • Support qconv_pointwise.tensor and qconv2d_pointwise.binary_tensor quantized operations. (#​166608)
  • Support out_dtype argument for matmul operations. (#​163393)
  • Add support for bound methods in pattern matcher. (#​167795)
  • Add way to register custom rules for graph partitioning. (#​166458) (#​163310)
  • Add codegen support for fast_tanhf on ROCm. (#​162052)
  • Support deepseek-style FP8 scaling in Inductor. (#​164404)
  • Enable int64 indexing in convolution and matmul templates. (#​162506)
  • Add SDPA patterns for T5 variants when batch size is 1. (#​163252)
  • Add mechanism to get optimal autotune decision for FlexAttention. (#​165817)
  • Add fallback config fallback_embedding_bag_byte_unpack. (#​163803)
  • Expose config for FX bucket all_reduces. (#​167634)
  • Add in-kernel NaN check support. (#​166008)
  • Enable pad_mm and decompose_mm_pass pass on Intel GPU. (#​166618) (#​166613)
  • Improve CUDA support for int8pack_mm weight-only quantization pattern. (#​161680) (#​161848) (#​163461)
  • Improve heuristics for pointwise kernels on ROCm. (#​163197)
  • Enable mix-order reduction fusion earlier and allow fusing more nodes. (#​168209)
  • Make mix order reduction work with dynamic shapes (#​168117)
  • Better use of memory tracking (#​168121)
  • Turn on LOAF (for OSS) by default. (#​162030)
  • Log kernel autotuning results to CSV. (#​164191)
  • Add warning for CUDA graph re-recording from dynamic shapes. (#​162696)
  • Quiesce triton compile workers by default. (#​169485)
  • Support masked vectorization for tail loops with integer and bool datatypes. (#​165885)
  • Support tile-wise (1x128) FP8 scaling in Inductor. (#​165132)
  • Support fallback for all GEMM-like operations. (#​165755)
  • Enable Triton kernels with unbacked inputs. (#​164509)
  • Add AVX512-VNNI-based micro kernel for CPU GEMM template. (#​166846)
  • Support mixed dtype in native_layer_norm_backward meta function. (#​159830)
  • Add tech specs for MI350 GPU. (#​166576)
  • Add assume_32bit_indexing inductor config option. (#​167784)
  • Wire up mask_mod and blockmask to FlexFlash implementation. (#​166359)
  • More aggressive mix order reduction for better fusion. (#​166382)
  • Mix order reduction heuristics and tuning. (#​166585)
  • CuteDSL flat indexer needs to be colexicographic in coordinate space (#166657)
MPS
Nested Tensor (NJT)
  • Added NJT support for share_memory_ (#​162272)
torch.nn
  • Support batch size 0 for flash attention in scaled_dot_product_attention (#​166318)
  • Raise an error when using a sliced BlockMask in nn.functional.flex_attention (#​164702)
ONNX
  • Improved graph capture logic to preserve dynamic shapes and improve conversion success rate
  • Cover all FX passes into backed size oblivious (#​166151)
  • Set prefer_deferred_runtime_asserts_over_guards to True (#​165820)
  • Various warning and error messages improvements (#​162819, #​163074, #​166412, #​166558, #​166692)
  • Improved operator translation logic
  • Update weight tensor initialization in RMSNormalization (#​166550)
  • Support enable_gqa when dropout is non-zero (#​162771)
  • Implement tofile() in ONNX IR tensors for more efficient ONNX model serialization (#​165195)
Optimizer
  • Make Adam, AdamW work with nonzero-dim Tensor betas (#​149939)
Profiler
  • Expose Kineto event metadata in PyTorch Profiler events (#​161624)
  • Add user_metadata display to memory visualizer (#​165939)
  • Add warning for clearing profiler events at the end of each cycle (#​168066)
Python Frontend
  • Improved torch.library and custom ops to support view functions (#​164520)
  • Rework PyObject preservation to make it thread safe, significantly simpler and better handle some edge cases (#​167564)
  • Remove reference cycle in torch.save to improve memory usage (#​165204)
  • Add generator arg to rand*_like APIs (#166160) (see the sketch after this list)
  • Support negative index arguments to torch.take_along_dim (#152161)
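
A minimal sketch of the new generator argument to rand*_like (values hypothetical):

import torch

# rand*_like APIs now accept an explicit generator for reproducible sampling.
g = torch.Generator().manual_seed(0)
x = torch.empty(3, 3)
noise = torch.randn_like(x, generator=g)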
Quantization
  • half and bf16 support for fused_moving_avg_obs_fake_quant (#​162620, #​164175)
  • bf16 support for fake_quantize_learnable_per_channel_affine (#​165098)
  • bf16 support for backward of torch._fake_quantize_learnable_per_tensor_affine (#​165362)
  • Add NVFP4 two-level scaling to scaled_mm (#​165774)
  • Add support for fp8_input/fp8_weight/bf16_bias and bf16_output for fp8 qconv in CPU (#​167611)
  • Make the torch.float4_e2m1fn_x2 dtype support equality comparisons (#​169575)
  • Add copy_ support for torch.float4_e2m1fn_x2 dtype (#169595)
Release Engineering

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.4,<3 [SECURITY]" on Jul 28, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from f59718f to 18d5c06 on July 28, 2024 17:35
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 18d5c06 to ad272b1 on July 28, 2024 18:03
renovate bot changed the title from "Update dependency torch to >=2.4,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Jul 28, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from ad272b1 to 62556a3 on October 9, 2024 08:15
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.4,<3 [SECURITY]" on Oct 9, 2024
renovate bot changed the title from "Update dependency torch to >=2.4,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Oct 9, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 62556a3 to 88db8b7 on October 9, 2024 10:52
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 88db8b7 to ccdfbfc on October 28, 2024 15:42
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.5,<3 [SECURITY]" on Oct 28, 2024
renovate bot changed the title from "Update dependency torch to >=2.5,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Oct 28, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from ccdfbfc to 4a1d578 on October 28, 2024 18:21
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 4a1d578 to c95af66 on December 2, 2024 12:37
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.5,<3 [SECURITY]" on Dec 2, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from c95af66 to 78e6998 on December 2, 2024 17:46
renovate bot changed the title from "Update dependency torch to >=2.5,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Dec 2, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 78e6998 to 60ddedd on December 17, 2024 20:53
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.5,<3 [SECURITY]" on Dec 17, 2024
renovate bot changed the title from "Update dependency torch to >=2.5,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Dec 17, 2024
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 60ddedd to 22d99e7 on December 17, 2024 22:21
renovate bot changed the title from "Update dependency torch to >=2.9,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Dec 30, 2025
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from f846654 to e3e5459 on January 19, 2026 15:31
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.9,<3 [SECURITY]" on Jan 19, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from e3e5459 to e79ce30 on January 19, 2026 18:40
renovate bot changed the title from "Update dependency torch to >=2.9,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Jan 19, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from e79ce30 to 2e2cbe3 on January 23, 2026 19:45
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.10,<3 [SECURITY]" on Jan 23, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 2e2cbe3 to a6dd4f3 on January 23, 2026 22:15
renovate bot changed the title from "Update dependency torch to >=2.10,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Jan 23, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from a6dd4f3 to 2ae229d on February 2, 2026 20:05
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.10,<3 [SECURITY]" on Feb 2, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 2ae229d to bb0933a on February 2, 2026 22:36
renovate bot changed the title from "Update dependency torch to >=2.10,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Feb 2, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from bb0933a to 8056c30 on February 12, 2026 11:39
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.10,<3 [SECURITY]" on Feb 12, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 8056c30 to c43dd7c on February 12, 2026 16:00
renovate bot changed the title from "Update dependency torch to >=2.10,<3 [SECURITY]" to "Update dependency torch to v2 [SECURITY]" on Feb 12, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from c43dd7c to 82b0e9d on February 16, 2026 12:48
renovate bot changed the title from "Update dependency torch to v2 [SECURITY]" to "Update dependency torch to >=2.10,<3 [SECURITY]" on Feb 16, 2026
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 82b0e9d to 52981a4 on February 16, 2026 18:15
renovate bot force-pushed the renovate/pypi-torch-vulnerability branch from 52981a4 to 567f83e on February 17, 2026 18:46
